

Search for: All records

Creators/Authors contains: "Prater-Bennette, Ashley"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available April 28, 2026
  2. Free, publicly-accessible full text available April 21, 2026
  3. Free, publicly-accessible full text available February 1, 2026
  4. Free, publicly-accessible full text available January 1, 2026
  5. Abstract: This paper introduces a nonconvex approach for sparse signal recovery, proposing a novel model termed the $$\tau_2$$-model, which utilizes the squared $$\ell_1/\ell_2$$ ratio of norms for this purpose. Our model offers an advancement over the $$\ell_0$$ norm, which is often computationally intractable and less effective in practical scenarios. Grounded in the concept of effective sparsity, our approach robustly measures the number of significant coordinates in a signal, making it a powerful alternative for sparse signal estimation. The $$\tau_2$$-model is particularly advantageous due to its computational efficiency and practical applicability. We detail two accompanying algorithms based on Dinkelbach's procedure and a difference-of-convex-functions strategy. The first algorithm treats the model as a linearly constrained quadratic programming problem in noiseless scenarios and as a quadratically constrained quadratic programming problem in noisy scenarios. The second algorithm, capable of handling both noiseless and noisy cases, is based on the alternating direction linearized proximal method of multipliers. We also explore the model's properties, including the existence of solutions under certain conditions, and discuss the convergence properties of the algorithms. Numerical experiments with various sensing matrices validate the effectiveness of our proposed model.
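The squared $$\ell_1/\ell_2$$ measure from item 5 is easy to illustrate: for a vector whose nonzero entries share the same magnitude, the measure equals the number of nonzeros, which is why it serves as a smooth surrogate for the $$\ell_0$$ norm. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def effective_sparsity(x, eps=1e-12):
    """Squared l1/l2 ratio: the 'effective sparsity' of a signal.

    For a vector with k equal-magnitude nonzero entries this equals
    exactly k, making it a smooth stand-in for the l0 norm.
    """
    x = np.asarray(x, dtype=float)
    l1 = np.abs(x).sum()
    l2 = np.sqrt((x ** 2).sum())
    return (l1 / max(l2, eps)) ** 2

# A vector with 3 equal-magnitude nonzeros has effective sparsity 3.
x = np.array([0.0, 2.0, 0.0, -2.0, 2.0])
print(effective_sparsity(x))  # close to 3
```

Unlike the $$\ell_0$$ count, the measure degrades gracefully: shrinking one coordinate toward zero moves the value continuously toward the sparser count.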
  6. Robust Markov decision processes (MDPs) aim to find a policy that optimizes the worst-case performance over an uncertainty set of MDPs. Existing studies have mostly focused on robust MDPs under the discounted-reward criterion, leaving those under the average-reward criterion largely unexplored. In this paper, we develop the first comprehensive and systematic study of robust average-reward MDPs, where the goal is to optimize the long-term average performance under the worst case. Our contributions are fourfold: (1) we prove the uniform convergence of the robust discounted value function to the robust average-reward function as the discount factor γ goes to 1; (2) we derive the robust average-reward Bellman equation, characterize the structure of its solution set, and prove the equivalence between solving the robust Bellman equation and finding the optimal robust policy; (3) we design robust dynamic programming algorithms and theoretically characterize their convergence to the optimal policy; and (4) we design two model-free algorithms utilizing the multi-level Monte Carlo approach and prove their asymptotic convergence.
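The worst-case dynamic programming idea in item 6 can be sketched for the simplest setting: a finite uncertainty set of transition kernels, where the adversary picks the worst kernel and the agent the best action. This toy sketch (the function name and the finite uncertainty set are illustrative assumptions, not the paper's algorithm) runs robust discounted value iteration; per contribution (1) above, (1 − γ)V approaches the robust average reward as γ → 1.

```python
import numpy as np

def robust_value_iteration(rewards, kernels, gamma, iters=2000):
    """Robust discounted value iteration over a finite uncertainty set.

    rewards: (S, A) array of immediate rewards.
    kernels: list of (S, A, S) transition tensors (the uncertainty set).
    The adversary minimizes over kernels; the agent maximizes over actions.
    """
    S, A = rewards.shape
    V = np.zeros(S)
    for _ in range(iters):
        # Q-values under the worst-case kernel, elementwise over (s, a)
        Q = np.min([rewards + gamma * (P @ V) for P in kernels], axis=0)
        V = Q.max(axis=1)
    return V

# Sanity check: one state, one action, reward 1 gives V = 1 / (1 - gamma),
# so (1 - gamma) * V equals the average reward of 1 for any gamma.
rewards = np.array([[1.0]])
kernels = [np.ones((1, 1, 1))]
V = robust_value_iteration(rewards, kernels, gamma=0.9)
```

General uncertainty sets (e.g. divergence balls around a nominal kernel) replace the inner `min` over a list with an inner optimization, which is where the paper's analysis does the real work.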
  7. Distributionally robust optimization (DRO) is a powerful framework for training robust models against data distribution shifts. This paper focuses on constrained DRO, which has an explicit characterization of the robustness level. Existing studies on constrained DRO mostly focus on convex loss functions and exclude the practical and challenging case of non-convex loss functions, e.g., neural networks. This paper develops a stochastic algorithm and its performance analysis for non-convex constrained DRO. The computational complexity of our stochastic algorithm at each iteration is independent of the overall dataset size, and it is thus suitable for large-scale applications. We focus on uncertainty sets defined by the general Cressie-Read family of divergences, which includes the χ²-divergence as a special case. We prove that our algorithm finds an ε-stationary point with improved computational complexity over existing methods. Our method also applies to the smoothed conditional value at risk (CVaR) DRO.
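The CVaR objective mentioned in item 7 has a simple empirical counterpart: CVaR at level α is the average of the worst α-fraction of losses, which coincides with a worst-case expectation over an uncertainty set of reweighted data distributions. A minimal full-batch sketch (plain, not the smoothed or stochastic version the paper analyzes; the function name is ours):

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Average of the worst alpha-fraction of losses.

    Minimizing this instead of the plain mean penalizes the hardest
    examples, which is the DRO view of CVaR: the adversary reweights
    mass onto the highest losses.
    """
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[:k].mean()

# With alpha = 0.5, CVaR averages the worst half of the losses.
print(empirical_cvar([1.0, 2.0, 3.0, 4.0], alpha=0.5))  # mean of {4, 3}
```

At α = 1 the measure reduces to the ordinary mean loss; smaller α interpolates toward the worst single example, i.e. a larger uncertainty set.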